Results 1 - 13 of 13
1.
Sci Rep ; 14(1): 6554, 2024 03 19.
Article in English | MEDLINE | ID: mdl-38503786

ABSTRACT

The integrated model of rumination argues that two trait factors, negative thinking habits and processing modes, get people stuck in maladaptive rumination. There is little evidence on whether these factors influence the daily dynamic associations between rumination and negative moods. To address this, we conducted an experience-sampling study with 92 Japanese university students, assessing their daily rumination and negative affect (NA) eight times a day for seven days. We examined the effects of habits and processing modes on the dynamic associations between rumination and negative moods using dynamic structural equation modeling. We found that individuals were more likely to ruminate when they experienced NA. However, contrary to previous findings, participants did not experience NA after engaging in rumination. Moreover, we did not detect any significant effect of the trait factors on these dynamic associations. Our findings imply that individuals are more likely to engage in rumination after experiencing NA, but that the reverse association, and particularly the autoregression of rumination, may not hold in natural daily life. Furthermore, negative thinking habits and processing modes may not influence the daily dynamic associations between rumination and NA among Japanese university students.
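
The dynamic structural equation models described here are typically fitted in specialized software; the sketch below illustrates only the core idea, within-person cross-lagged associations between rumination and NA, using simulated long-format experience-sampling data. All variable names and values are placeholders, not the authors' data or model.

# Minimal sketch: within-person cross-lagged associations between rumination
# and negative affect (NA) in simulated experience-sampling data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_id, n_beeps = 92, 56                     # 92 participants x 8 beeps x 7 days
df = pd.DataFrame({
    "id": np.repeat(np.arange(n_id), n_beeps),
    "rumination": rng.normal(3, 1, n_id * n_beeps),
    "na": rng.normal(2, 1, n_id * n_beeps),
})

# Person-mean centering separates within-person dynamics from trait levels.
for col in ["rumination", "na"]:
    df[col + "_c"] = df[col] - df.groupby("id")[col].transform("mean")

# Lag each centered variable by one beep within each participant.
df["rum_lag"] = df.groupby("id")["rumination_c"].shift(1)
df["na_lag"] = df.groupby("id")["na_c"].shift(1)

# Cross-lagged multilevel regressions: does prior NA predict current
# rumination, and does prior rumination predict current NA?
dat = df.dropna()
m1 = smf.mixedlm("rumination ~ na_lag + rum_lag", dat, groups="id").fit()
m2 = smf.mixedlm("na ~ rum_lag + na_lag", dat, groups="id").fit()
print(m1.params, m2.params)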


Subjects
Affect, Thinking, Humans, Japan, Universities, Habits, Students
2.
Sci Rep ; 13(1): 21785, 2023 12 08.
Article in English | MEDLINE | ID: mdl-38066065

ABSTRACT

Research on facial expressions using sensing information is progressing in multidisciplinary fields such as psychology, affective computing, and cognitive science. Previous facial datasets have not simultaneously dealt with multiple theoretical views of emotion, individualized context, or multi-angle/depth information. We developed a new facial database (the RIKEN facial expression database) that includes multiple theoretical views of emotion and expressers' individualized events with multi-angle and depth information. The RIKEN facial expression database contains recordings of 48 Japanese participants captured by ten Kinect cameras across 25 events. This study identified several valence-related facial patterns and found them consistent with previous research on the coherence between facial movements and internal states. This database represents an advance toward developing new sensing systems, conducting psychological experiments, and understanding the complexity of emotional events.


Subjects
Emotions, Facial Expression, Humans, Movement, Face, Factual Databases
3.
PLoS One ; 17(7): e0271047, 2022.
Article in English | MEDLINE | ID: mdl-35839208

ABSTRACT

Reading the genuineness of facial expressions is important for increasing the credibility of information conveyed by faces. However, it remains unclear which spatio-temporal characteristics of facial movements serve as critical cues to the perceived genuineness of facial expressions. This study focused on observable spatio-temporal differences between perceived-as-genuine and deliberate expressions of happiness and anger. In this experiment, 89 Japanese participants judged the perceived genuineness of faces in videos showing happiness or anger expressions. To identify diagnostic facial cues to the perceived genuineness of the facial expressions, we analyzed a total of 128 face videos using an automated facial action detection system; moment-to-moment activations of facial action units were annotated, and nonnegative matrix factorization extracted sparse and meaningful components from all action unit data. The results showed that genuineness judgments decreased when more spatial patterns were observed in facial expressions. As for temporal features, the perceived-as-deliberate expressions of happiness generally had faster onsets to the peak than the perceived-as-genuine expressions of happiness. Moreover, opening the mouth contributed negatively to perceived-as-genuine expressions, irrespective of the type of facial expression. These findings provide the first evidence for dynamic facial cues to the perceived genuineness of happiness and anger expressions.
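
As an illustration of the factorization step mentioned above, the sketch below applies nonnegative matrix factorization to a simulated frame-by-AU intensity matrix. The matrix shape and number of components are assumptions, not the paper's settings.

# Minimal sketch: extracting sparse components from frame-by-frame action
# unit (AU) activations with nonnegative matrix factorization.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
au_activations = rng.random((500, 17))      # placeholder (n_frames, n_AUs) matrix

model = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(au_activations)     # per-frame component activations
H = model.components_                       # AU loadings per component

# Components with a few strongly loaded AUs describe interpretable facial patterns.
print(H.round(2))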


Subjects
Emotions, Facial Expression, Anger, Face, Happiness, Humans
4.
Front Psychol ; 13: 849499, 2022.
Article in English | MEDLINE | ID: mdl-35645906

ABSTRACT

Although many psychological studies have shown that sharing emotion supports dyadic interaction, how authentic information about expressers' feeling states is transmitted to perceivers through emotional expressions has not been formally examined. In this study, we used computational modeling, specifically a multinomial processing tree, to formally quantify the process of sharing emotion, emphasizing the perception of authentic information about expressers' feeling states from facial expressions. Results indicated that the probability of perceiving authentic information about feeling states was higher for happy expressions than for angry expressions. In addition, happy facial expressions activated both emotional elicitation and emotion sharing in perceivers, whereas angry facial expressions produced emotional elicitation alone rather than emotion sharing. Parameters for detecting anger experiences correlated positively with those for happiness. No robust correlations were found between the parameters extracted from the experimental task and questionnaire measures of emotional contagion, empathy, and social anxiety. These results suggest that this new computational approach can contribute to describing emotion-sharing processes.
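
The authors' multinomial processing tree is not specified in the abstract; the sketch below only illustrates the general MPT machinery, fitting a toy detect-or-guess tree by maximum likelihood with hypothetical counts.

# Minimal sketch: maximum-likelihood fit of a toy multinomial processing tree
# with detection parameters d (perceiving authentic feeling information) and a
# guessing parameter g. This is NOT the authors' model; real MPT designs need
# enough response categories/conditions to identify all parameters.
import numpy as np
from scipy.optimize import minimize

# Hypothetical counts of (correct, incorrect) authenticity judgments per emotion.
counts = {"happiness": (80, 20), "anger": (55, 45)}

def neg_log_lik(params):
    d_happy, d_anger, g = params
    nll = 0.0
    for d, (correct, incorrect) in zip([d_happy, d_anger], counts.values()):
        p = np.clip(d + (1 - d) * g, 1e-9, 1 - 1e-9)   # detect, or guess correctly
        nll -= correct * np.log(p) + incorrect * np.log(1 - p)
    return nll

fit = minimize(neg_log_lik, x0=[0.5, 0.5, 0.5], bounds=[(0, 1)] * 3)
print(dict(zip(["d_happy", "d_anger", "g"], fit.x.round(3))))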

5.
Front Psychol ; 12: 684249, 2021.
Article in English | MEDLINE | ID: mdl-34434141

ABSTRACT

Facial expressions of emotion can convey information about the world and disambiguate elements of the environment, thus providing direction to other people's behavior. However, the functions of facial expressions from the perspective of learning patterns over time remain elusive. This study investigated how feedback from facial expressions influences learning in an ambiguous context using the Iowa Gambling Task. The results revealed that learning was slower with facial expression feedback than with symbolic feedback in the middle of the learning period. No difference was observed in deck selection or computational model parameters between the conditions, and no correlation was observed between task indicators and scores on depression questionnaires.
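
The computational model used in the study is not described in the abstract; the sketch below shows a common Iowa Gambling Task learning model (delta-rule expectancy updating with softmax choice) on a placeholder payoff scheme, not the authors' fitted model.

# Minimal sketch of a generic Iowa Gambling Task learning model.
import numpy as np

rng = np.random.default_rng(0)
# deck: (gain per pick, probability of a loss, loss amount) -- placeholders
payoffs = {0: (100, 0.5, -250), 1: (100, 0.1, -1250),
           2: (50, 0.5, -50), 3: (50, 0.1, -250)}

alpha, temp = 0.2, 0.5            # learning rate, softmax temperature
Q = np.zeros(4)                   # expected outcome per deck

for trial in range(100):
    z = (Q - Q.max()) / temp                     # numerically stable softmax
    p = np.exp(z) / np.exp(z).sum()
    deck = rng.choice(4, p=p)
    gain, p_loss, loss = payoffs[deck]
    outcome = (gain + (loss if rng.random() < p_loss else 0)) / 100.0
    Q[deck] += alpha * (outcome - Q[deck])       # delta-rule update

print(Q.round(2))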

6.
Sensors (Basel) ; 21(12)2021 Jun 20.
Article in English | MEDLINE | ID: mdl-34203007

ABSTRACT

In the field of affective computing, achieving accurate automatic detection of facial movements is an important issue, and great progress has already been made. However, a systematic evaluation of the available systems on dynamic facial databases remains an unmet need. This study compared the performance of three systems (FaceReader, OpenFace, and AFARtoolbox) that detect facial movements corresponding to action units (AUs) derived from the Facial Action Coding System. All three systems detected the presence of AUs in the dynamic facial database at a level above chance. Moreover, OpenFace and AFAR yielded higher values of the area under the receiver operating characteristic curve than FaceReader. In addition, several confusion biases between facial components (e.g., AU12 and AU14) were observed for each automated AU detection system, and the static mode was superior to the dynamic mode for analyzing the posed facial database. These findings characterize the prediction patterns of each system and provide guidance for research on facial expressions.
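
As an illustration of this kind of benchmark, the sketch below computes the area under the ROC curve for several hypothetical detectors against human FACS coding of one AU; the frame-level labels and scores are simulated placeholders, not output from the systems named above.

# Minimal sketch: per-AU ROC AUC for automated detectors vs. human FACS coding.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_frames = 1000
human_au12 = rng.integers(0, 2, n_frames)          # FACS-coded presence/absence

detector_scores = {                                # simulated detector outputs
    "detector_a": rng.random(n_frames),
    "detector_b": np.clip(human_au12 + rng.normal(0, 0.6, n_frames), 0, 1),
    "detector_c": np.clip(human_au12 + rng.normal(0, 0.8, n_frames), 0, 1),
}

for name, scores in detector_scores.items():
    print(name, round(roc_auc_score(human_au12, scores), 3))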


Subjects
Face, Facial Expression, Factual Databases
7.
Sci Rep ; 11(1): 3362, 2021 02 09.
Article in English | MEDLINE | ID: mdl-33564091

ABSTRACT

The physical properties of genuine and deliberate facial expressions remain elusive. This study focuses on observable dynamic differences between genuine and deliberate expressions of surprise, based on the temporal structure of facial parts during emotional expression. Facial expressions of surprise were elicited using multiple methods and video recorded: some senders were filmed as they experienced genuine surprise in response to a jack-in-the-box (Genuine), and other senders were asked to produce deliberate surprise with no preparation (Improvised), by mimicking the expression of another person (External), or by reproducing the surprised face after having first experienced genuine surprise (Rehearsed). A total of 127 videos were analyzed, and moment-to-moment movements of the eyelids and eyebrows were annotated with deep learning-based tracking software. Results showed that all surprise displays were mainly composed of raising movements of the eyebrows and eyelids. Genuine displays included horizontal movement in the left part of the face, but also showed the weakest movement coupling of all conditions. External displays had faster eyebrow and eyelid movement, while Improvised displays showed the strongest coupling of movements. The findings demonstrate the importance of dynamic information in the encoding of genuine and deliberate expressions of surprise, and the importance of the production method employed in research.
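
Movement coupling of the kind described above can be quantified in several ways; the sketch below uses the peak of the normalized cross-correlation between two simulated eyebrow and eyelid traces as one illustrative measure, not the paper's exact metric.

# Minimal sketch: coupling between eyebrow and eyelid traces as peak
# normalized cross-correlation. The traces are simulated placeholders for
# landmark time series produced by tracking software.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 2, 60)                          # ~2 s of video at 30 fps
eyebrow = np.sin(2 * np.pi * t) + rng.normal(0, 0.1, t.size)
eyelid = np.sin(2 * np.pi * (t - 0.1)) + rng.normal(0, 0.1, t.size)

def peak_xcorr(a, b):
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    xc = np.correlate(a, b, mode="full") / a.size
    lags = np.arange(-a.size + 1, a.size)
    return xc.max(), lags[xc.argmax()]             # coupling strength, lag (frames)

strength, lag = peak_xcorr(eyebrow, eyelid)
print(round(strength, 3), lag)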


Subjects
Emotions/physiology, Facial Expression, Adult, Female, Humans, Male
8.
Front Psychol ; 12: 800657, 2021.
Article in English | MEDLINE | ID: mdl-35185697

ABSTRACT

Android robots capable of emotional interactions with humans have considerable potential for application to research. While several studies developed androids that can exhibit human-like emotional facial expressions, few have empirically validated androids' facial expressions. To investigate this issue, we developed an android head called Nikola based on human psychology and conducted three studies to test the validity of its facial expressions. In Study 1, Nikola produced single facial actions, which were evaluated in accordance with the Facial Action Coding System. The results showed that 17 action units were appropriately produced. In Study 2, Nikola produced the prototypical facial expressions for six basic emotions (anger, disgust, fear, happiness, sadness, and surprise), and naïve participants labeled photographs of the expressions. The recognition accuracy of all emotions was higher than chance level. In Study 3, Nikola produced dynamic facial expressions for six basic emotions at four different speeds, and naïve participants evaluated the naturalness of the speed of each expression. The effect of speed differed across emotions, as in previous studies of human expressions. These data validate the spatial and temporal patterns of Nikola's emotional facial expressions, and suggest that it may be useful for future psychological studies and real-life applications.
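
One simple way to test recognition accuracy against chance, as reported above, is a binomial test; the sketch below uses hypothetical counts and assumes six response options, so it illustrates the logic rather than the authors' analysis.

# Minimal sketch: is labeling accuracy for one expression above chance (1/6)?
from scipy.stats import binomtest

n_correct, n_trials = 41, 60                 # hypothetical labeling results
result = binomtest(n_correct, n_trials, p=1 / 6, alternative="greater")
print(result.pvalue)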

9.
Emotion ; 21(2): 447-451, 2021 Mar.
Article in English | MEDLINE | ID: mdl-31829721

ABSTRACT

The majority of research on the judgment of emotion from facial expressions has focused on deliberately posed displays, often sampled from single stimulus sets. Herein, we investigate emotion recognition from posed and spontaneous expressions, comparing classification performance between humans and machine in a cross-corpora investigation. For this, dynamic facial stimuli portraying the six basic emotions were sampled from a broad range of different databases, and then presented to human observers and a machine classifier. Recognition performance by the machine was found to be superior for posed expressions containing prototypical facial patterns, and comparable to humans when classifying emotions from spontaneous displays. In both humans and machine, accuracy rates were generally higher for posed compared to spontaneous stimuli. The findings suggest that automated systems rely on expression prototypicality for emotion classification and may perform just as well as humans when tested in a cross-corpora context. (PsycInfo Database Record (c) 2021 APA, all rights reserved).


Subjects
Artificial Intelligence/standards, Behavior Observation Techniques/methods, Emotions/physiology, Facial Expression, Recognition (Psychology)/physiology, Adolescent, Adult, Female, Humans, Male, Young Adult
10.
Behav Res Methods ; 53(2): 686-701, 2021 04.
Article in English | MEDLINE | ID: mdl-32804342

ABSTRACT

With a shift in interest toward dynamic expressions, numerous corpora of dynamic facial stimuli have been developed over the past two decades. The present research aimed to test existing sets of dynamic facial expressions (published between 2000 and 2015) in a cross-corpus validation effort. For this, 14 dynamic databases were selected that featured facial expressions of the six basic emotions (anger, disgust, fear, happiness, sadness, surprise) in posed or spontaneous form. In Study 1, a subset of stimuli from each database (N = 162) was presented to human observers and machine analysis, yielding considerable variance in emotion recognition performance across the databases. Classification accuracy further varied with the perceived intensity and naturalness of the displays, with posed expressions being judged more accurately and rated as more intense, but less natural, than spontaneous ones. Study 2 aimed for a full validation of the 14 databases by subjecting the entire stimulus set (N = 3812) to machine analysis. A FACS-based action unit (AU) analysis revealed that facial AU configurations were more prototypical in posed than in spontaneous expressions. The prototypicality of an expression in turn predicted emotion classification accuracy, with higher performance observed for more prototypical facial behavior. Furthermore, technical features of each database (i.e., duration, face box size, head rotation, and motion) had a significant impact on recognition accuracy. Together, the findings suggest that existing databases vary in their ability to signal specific emotions, facing a trade-off between realism and ecological validity on the one hand, and expression uniformity and comparability on the other.
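
As an illustration of the prototypicality analysis described above, the sketch below regresses per-stimulus classification accuracy on prototypicality and the technical clip features named in the abstract, using simulated placeholder data rather than the paper's measures.

# Minimal sketch: stimulus-level regression of recognition accuracy on
# prototypicality and technical features (all values simulated).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 3812                                     # number of stimuli in Study 2
df = pd.DataFrame({
    "prototypicality": rng.random(n),
    "duration": rng.uniform(1, 10, n),
    "face_box_size": rng.uniform(100, 400, n),
    "head_rotation": rng.uniform(0, 30, n),
    "motion": rng.random(n),
})
df["accuracy"] = 0.4 + 0.3 * df["prototypicality"] + rng.normal(0, 0.1, n)

model = smf.ols("accuracy ~ prototypicality + duration + face_box_size"
                " + head_rotation + motion", data=df).fit()
print(model.summary())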


Subjects
Emotions, Facial Expression, Anger, Happiness, Humans, Recognition (Psychology)
11.
Behav Sci (Basel) ; 10(10)2020 Oct 13.
Article in English | MEDLINE | ID: mdl-33066229

ABSTRACT

Previous studies have reported that verbal sounds are non-arbitrarily associated with specific meanings (e.g., sound symbolism and onomatopoeia), including visual forms of information such as facial expressions; however, it remains unclear how the mouth shapes used to utter each vowel create our semantic impressions. We asked 81 Japanese participants to evaluate mouth shapes associated with five Japanese vowels using 10 five-item semantic differential scales. The results revealed that the physical characteristics of the facial expressions (mouth shapes) induced specific evaluations. For example, the mouth shape made to voice the vowel "a" had the biggest, widest, and highest facial components of all the mouth shapes, and people perceived words containing that vowel sound as bigger. The mouth shape used to pronounce the vowel "i" was perceived as more likable than those of the other four vowels. These findings indicate that the mouth shapes producing vowels imply specific meanings. Our study provides clues about the meaning of verbal sounds and what facial expressions in communication represent to the perceiver.

12.
Front Psychol ; 9: 672, 2018.
Article in English | MEDLINE | ID: mdl-29896135

ABSTRACT

Accurately gauging the emotional experience of another person is important for navigating interpersonal interactions. This study investigated whether perceivers are capable of distinguishing between unintentionally expressed (genuine) and intentionally manipulated (posed) facial expressions attributed to four major emotions: amusement, disgust, sadness, and surprise. Sensitivity to this discrimination was explored by comparing unstaged dynamic and static facial stimuli and analyzing the results with signal detection theory. Participants indicated whether facial stimuli presented on a screen depicted a person showing a given emotion and whether that person was feeling a given emotion. The results showed that genuine displays were evaluated more as felt expressions than posed displays for all target emotions presented. In addition, sensitivity to the perception of emotional experience, or discriminability, was enhanced in dynamic facial displays, but was less pronounced in the case of static displays. This finding indicates that dynamic information in facial displays contributes to the ability to accurately infer the emotional experiences of another person.
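
Sensitivity in this kind of genuine-versus-posed judgment task is commonly summarized as d'; the sketch below computes d' and the response criterion from hypothetical hit and false-alarm counts, as one illustration of the signal detection analysis mentioned above.

# Minimal sketch: d' and criterion for discriminating genuine from posed
# displays. Counts are placeholders; a log-linear correction guards against
# hit or false-alarm rates of 0 or 1.
from scipy.stats import norm

hits, misses = 38, 12            # "felt" responses to genuine displays
fas, crs = 20, 30                # "felt" responses to posed displays

hit_rate = (hits + 0.5) / (hits + misses + 1)
fa_rate = (fas + 0.5) / (fas + crs + 1)

d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
print(round(d_prime, 3), round(criterion, 3))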

13.
Front Psychol ; 8: 633, 2017.
Article in English | MEDLINE | ID: mdl-28522979

ABSTRACT

While numerous studies have examined the relationships between facial actions and emotions, they have yet to account for the ways that specific spontaneous facial expressions map onto emotional experiences induced without expressive intent. Moreover, previous studies have emphasized that a fine-grained investigation of facial components could establish the coherence of facial actions with actual internal states. Therefore, this study aimed to accumulate evidence for the correspondence between spontaneous facial components and emotional experiences. We reinvestigated data from previous research in which the spontaneous facial expressions of Japanese participants were covertly recorded as they watched film clips designed to evoke four target emotions: surprise, amusement, disgust, and sadness. The participants rated their emotional experiences on a self-report questionnaire covering 16 emotions. The spontaneous facial expressions were coded using the Facial Action Coding System, the gold standard for classifying visible facial movements. We then related each facial action that was present to the ratings of emotional experience by applying stepwise regression models. The results showed that spontaneous facial components occurred in ways coherent with their evolutionary functions, based on the ratings of emotional experience (e.g., the inner brow raiser might be involved in the evaluation of novelty). This study provides new empirical evidence for the correspondence between each spontaneous facial component and the first-person internal states of emotion reported by the expresser.
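
The stepwise regressions mentioned above could be implemented in many ways; the sketch below shows a generic forward selection by AIC predicting the occurrence of a single hypothetical AU from simulated emotion ratings, not the paper's exact pipeline or variables.

# Minimal sketch: forward stepwise selection (by AIC) of emotion ratings that
# predict whether one facial action unit occurred (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({e: rng.integers(0, 7, n)
                   for e in ["surprise", "amusement", "disgust", "sadness"]})
df["au1_present"] = (df["surprise"] + rng.normal(0, 2, n) > 4).astype(int)

candidates = ["surprise", "amusement", "disgust", "sadness"]
selected = []
best_aic = smf.logit("au1_present ~ 1", data=df).fit(disp=0).aic

while candidates:
    aics = {c: smf.logit("au1_present ~ " + " + ".join(selected + [c]),
                         data=df).fit(disp=0).aic for c in candidates}
    best = min(aics, key=aics.get)
    if aics[best] >= best_aic:        # stop when no candidate improves AIC
        break
    best_aic = aics[best]
    selected.append(best)
    candidates.remove(best)

print(selected)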
